We introduce the problem of releasing sensitive data under differential privacy when the privacy level is subject to change over time. Existing work assumes that the privacy level is fixed by the system designer before any sensitive data is released. For certain applications, however, users may wish to relax the privacy level for subsequent releases of the same data, after either a re-evaluation of the privacy concerns or a need for better accuracy. Specifically, given a database containing sensitive data, we assume that a response $y_1$ that preserves $\epsilon_{1}$-differential privacy has already been published. Then, the privacy level is relaxed to $\epsilon_2$, with $\epsilon_2 > \epsilon_1$, and we wish to publish a more accurate response $y_2$ such that the joint response $(y_1, y_2)$ preserves $\epsilon_2$-differential privacy. How much accuracy is lost in the scenario of gradually releasing two responses $y_1$ and $y_2$ compared to the scenario of releasing a single response that is $\epsilon_{2}$-differentially private? Our results show that there exists a composite mechanism that achieves \textit{no loss} in accuracy. We consider the case in which the private data lies within $\mathbb{R}^{n}$ with an adjacency relation induced by the $\ell_{1}$-norm, and we focus on mechanisms that approximate identity queries. We show that the same accuracy can be achieved in the case of gradual release through a mechanism whose outputs can be described by a \textit{lazy Markov stochastic process}. This stochastic process has a closed-form expression and can be efficiently sampled. Our results are applicable beyond identity queries. To this end, we demonstrate that our results can be applied in several cases, including Google's RAPPOR project, trading of sensitive data, and controlled transmission of private data in a social network.
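The lazy-update idea can be illustrated with a minimal, hypothetical sketch in the scalar case. It is not the paper's mechanism verbatim: we assume the standard Laplace mechanism for identity queries (noise of scale $1/\epsilon$) and a keep-probability of $(\epsilon_1/\epsilon_2)^2$, and we sample the joint pair in the convenient direction, drawing the accurate response $y_2$ first and then deriving the noisier $y_1$ from it. Under these assumptions, one can check (e.g. via characteristic functions) that $y_1$ is marginally distributed as a standalone $\epsilon_1$-differentially private Laplace response, while $y_2$ has the accuracy of a standalone $\epsilon_2$-differentially private one.

```python
import random


def laplace(scale, rng):
    # The difference of two i.i.d. exponentials with rate 1/scale
    # is Laplace-distributed with the given scale.
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)


def gradual_release(x, eps1, eps2, rng):
    """Sample a joint pair (y1, y2) for private value x, eps2 > eps1.

    Illustrative sketch only: y2 is drawn as an eps2-accurate Laplace
    response; y1 is a 'lazy' coarsening of y2 -- kept identical with
    probability (eps1/eps2)**2, otherwise perturbed by fresh Laplace
    noise of scale 1/eps1 (assumed transition kernel).
    """
    assert eps2 > eps1 > 0
    y2 = x + laplace(1.0 / eps2, rng)
    if rng.random() < (eps1 / eps2) ** 2:
        y1 = y2  # lazy step: reuse the same value
    else:
        y1 = y2 + laplace(1.0 / eps1, rng)  # jump: add fresh noise
    return y1, y2
```

As a sanity check on the marginals, the empirical mean absolute errors $\mathbb{E}|y_1 - x|$ and $\mathbb{E}|y_2 - x|$ should approach $1/\epsilon_1$ and $1/\epsilon_2$ respectively, i.e. the second, relaxed release costs nothing in accuracy relative to a single $\epsilon_2$-private release.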